Accepted Papers
December 14, 2024 - West Meeting Room 111-112
Pre-registration form: https://forms.gle/YBCwn7L8N5AxExMG7
Evaluating and Mitigating Discrimination in Language Model Decisions
Alex Tamkin · Amanda Askell · Liane Lovitt · Esin Durmus · Nicholas Joseph · Shauna Kravec · Karina Nguyen · Jared Kaplan · Deep Ganguli
Towards Fair RAG: On the Impact of Fair Ranking in Retrieval-Augmented Generation
To Eun Kim · Fernando Diaz
Different Horses for Different Courses: Comparing Bias Mitigation Algorithms in ML
Prakhar Ganesh · Usman Gohar · Lu Cheng · Golnoosh Farnadi
Evaluating Gender Bias Transfer between Pre-trained and Prompt Adapted Language Models
Nivedha Sivakumar · Natalie Mackraz · Samira Khorshidi · Krishna Patel · Barry-John Theobald · Luca Zappella · Nicholas Apostoloff
The Search for Less Discriminatory Algorithms: Limits and Opportunities
Benjamin Laufer · Manish Raghavan · Solon Barocas
M²FGB: A Min-Max Gradient Boosting Framework for Subgroup Fairness
Jansen Pereira · Giovani Valdrighi · Marcos M. Raimundo
Benchmark to Audit LLM Generated Clinical Notes for Disparities Arising from Biases and Stereotypes
Hongyu Cai · Swetasudha Panda · Naveen Jafer Nizar · Qinlan Shen · Daeja Oxendine · Sumana Srivatsa · Krishnaram Kenthapadi
Mitigating Bias in Facial Recognition Systems: Centroid Fairness Loss Optimization
Jean-Rémy Conti · Stephan Clémençon
Measuring Representational Harms in Image Generation with a Multi-Group Proportional Metric
Sangwon Jung · Claudio Mayrink Verdun · Alex Oesterling · Sajani Vithana · Taesup Moon · Flavio du Pin Calmon
Efficient Fairness-Performance Pareto Front Computation
Mark Kozdoba · Benny Perets · Shie Mannor
From Models to Systems: A Comprehensive Fairness Framework for Compositional Recommender Systems
Brian Hsu · Cyrus DiCiccio · Natesh Pillai · Hongseok Namkoong
Understanding The Effect Of Temperature On Alignment With Human Opinions
Maja Pavlovic · Massimo Poesio
Towards Reliable Fairness Assessments of Multi-Label Image Classifiers
Melissa Hall · Bobbie Chern · Laura Gustafson · Denisse Ventura · Harshad Kulkarni · Candace Ross · Nicolas Usunier
Toward Large Language Models that Benefit for All: Benchmarking Group Fairness in Reward Models
Kefan Song · Jin Yao · Shangtong Zhang
Better Bias Benchmarking of Language Models via Multi-factor Analysis
Hannah Powers · Ioana Baldini · Dennis Wei · Kristin P Bennett
Adaptive Group Robust Ensemble Knowledge Distillation
Patrik Joslin Kenfack · Ulrich Aïvodji · Samira Ebrahimi Kahou
The Intersectionality Problem for Algorithmic Fairness
Johannes Himmelreich · Arbie Hsu · Kristian Lum · Ellen Veomett
Counterpart Fairness – Addressing Systematic Between-Group Differences in Fairness Evaluation
Yifei Wang · Zhengyang Zhou · Liqin Wang · John Laurentiev · Peter Hou · Li Zhou · Pengyu Hong
Multi-Output Distributional Fairness via Post-Processing
Gang Li · Qihang Lin · Ayush Ghosh · Tianbao Yang
Fair Summarization: Bridging Quality and Diversity in Extractive Summaries
Sina Bagheri Nezhad · Sayan Bandyapadhyay · Ameeta Agrawal
Improving Bias Metrics in Vision-Language Models by Addressing Inherent Model Disabilities
Lakshmipathi Balaji Darur · Shanmukha Sai Keerthi Gouravarapu · Shashwat Goel · Ponnurangam Kumaraguru
Beyond Internal Data: Constructing Complete Datasets for Fairness Testing
Varsha Ramineni · Hossein A. Rahmani · Emine Yilmaz · David Barber
Optimal Selection Using Algorithmic Rankings with Side Information
Kate Donahue · Nicole Immorlica · Brendan Lucier
Verifiable evaluations of machine learning models using zkSNARKs
Tobin South · Alexander Camuto · Shrey Jain · Robert Mahari · Christian Paquin · Jason Morton · Alex 'Sandy' Pentland
Imitation Guided Automated Red Teaming
Sajad Mousavi · Desik Rengarajan · Ashwin Ramesh Babu · Vineet Gundecha · Soumyendu Sarkar
Measuring the Impact of Equal Treatment as Blindness via Explanations Disparity
Carlos Mougan · Salvatore Ruggieri · Laura State · Antonio Ferrara · Steffen Staab
Multilingual Hallucination Gaps in Large Language Models
Cléa Chataigner · Afaf Taik · Golnoosh Farnadi
Fairness-Enhancing Data Augmentation Methods for Worst-Group Accuracy
Monica Welfert · Nathan Stromberg · Lalitha Sankar
Exploring AUC-like metrics to propose threshold-independent fairness evaluation
Daniel Gratti · Thalita Veronese · Marcos M. Raimundo
Abdoul Jalil Djiberou Mahamadou · Lea Goetz
Improving Fairness in Matching under Uncertainty
Piyushi Manupriya
LLMs Infer Protected Attributes Beyond Proxy Features
Dimitri Staufer
On Optimal Subgroups for Group Distributionally Robust Optimisation
Anissa Alloula · Daniel McGowan · Bartlomiej W. Papiez
Demographic (Mis)Alignment of LLMs' Perception of Offensiveness
Shayan Alipour · Indira Sen · Preetam Prabhu Srikar Dammu · Chris Choi · Mattia Samory · Tanu Mitra
Towards Better Fairness Metrics for Counter-Human Trafficking AI Initiatives
Vidya Sujaya · Pratheeksha Nair · Reihaneh Rabbany
Q-Morality: Quantum-Enhanced ActAdd-Guided Bias Reduction in LLMs
Shardul Kulkarni
What's in a Query: Examining Distribution-based Amortized Fair Ranking
Aparna Balagopalan · Kai Wang · Asia Biega · Marzyeh Ghassemi